ML Theory Lecture 4 Matus Telgarsky

Author

  • Matus Telgarsky

Abstract

Currently in the course we are showing that we can approximate continuous functions on compact sets with piecewise constant functions; in other words, with boxes. We will finish this today. Remaining in the representation section we still have: polynomial fit of continuous functions, functions we can fit succinctly, and GANs.

Remark 1.1 (Homework comment). A brief comment about problem 2(c): it wasn’t stated clearly enough, so hardly anything was taken off, but it really wanted something about the LIL (law of the iterated logarithm) and Hoeffding disagreeing. Something along the lines of there being a lower bound (anti-concentration) infinitely often sufficed. Note one funny thing we can do with Hoeffding. Hoeffding by default holds for a fixed n, but we can instantiate it for every n at once with the sequence (δ_n)_{n ≥ 1}, where δ_n := δ/(n(n + 1)). Thus ...
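The excerpt ends here; as a worked guess at where the calculation goes (my own sketch, not text from the lecture, assuming i.i.d. X_1, X_2, … taking values in [0, 1]): the failure probabilities telescope,

\[
\sum_{n \ge 1} \delta_n \;=\; \delta \sum_{n \ge 1} \frac{1}{n(n+1)} \;=\; \delta \sum_{n \ge 1} \Bigl( \frac{1}{n} - \frac{1}{n+1} \Bigr) \;=\; \delta,
\]

so a union bound gives that with probability at least 1 − δ, simultaneously for every n ≥ 1,

\[
\Bigl| \frac{1}{n} \sum_{i=1}^{n} X_i - \mathbb{E}\, X_1 \Bigr| \;\le\; \sqrt{\frac{\ln(2/\delta_n)}{2n}},
\]

i.e. the usual two-sided Hoeffding bound at level δ_n holds for all sample sizes at once.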

Similar resources

ML Theory Lecture 8 Matus Telgarsky

Let’s briefly recap where we are in the course.
  • We’re almost done with the “representation” part of the course. Today we’ll establish the key result in the “succinctness” portion: there exist small, deep networks which cannot be approximated by shallow networks, even if they are huge.
  • The next class we’ll talk about GANs and other probability models, which will conclude this “representatio...

ML Theory Lecture 9

Today is the last lecture on representation. We’ve shown that neural nets can fit continuous functions, and also that they can fit some other functions succinctly (with small representation size), but we’ve only looked at univariate outputs! Today we’ll close the topic with a look at a much different representation problem: using machine learning models to approximate probability distributions!...

ML Theory Lecture 2 Matus Telgarsky

Coherence. In this problem we are considering a curious setup where past and future learning are not distinct; instead, vectors x ∈ R^d just keep coming, along with correct labels y ∈ {−1,+1}, and we can learn forever if we wish. Coherence will be provided in the following interesting way. There will be a fixed vector u ∈ R^d and scalar γ > 0 (“fixed” means: fixed across all time) so that every p...
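The excerpt is cut off; the standard way such a margin/coherence assumption is phrased (offered here as a guess, not as text from the lecture) is that every arriving pair (x, y) satisfies

\[
y \, \langle u, x \rangle \;\ge\; \gamma,
\]

typically normalized so that ‖u‖ = 1 and ‖x‖ ≤ 1, which is exactly the regime where perceptron-style arguments give at most 1/γ² mistakes over all time.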

Publication date: 2017